
    Application and support for high-performance simulation

    Editorial Comment. High performance simulation that supports sophisticated simulation experimentation and optimization can require non-trivial amounts of computing power. Advanced distributed computing techniques and systems found in areas such as High Performance Computing (HPC), High Throughput Computing (HTC), grid computing, cloud computing and e-Infrastructures are needed to effectively provide the computing power required for the high performance simulation of large and complex models. In simulation there has been a long tradition of translating and adopting advances in distributed computing, as shown by contributions from the parallel and distributed simulation community. This special issue brings together a contemporary collection of work showcasing original research in the advancement of simulation theory and practice with distributed computing. The special issue is divided into two parts. This first part focuses on research pertaining to high performance simulation that supports a range of applications, including the study of epidemics, social networks, urban mobility, and real-time embedded and cyber-physical systems. Compared to other simulation techniques, agent-based modeling and simulation is relatively new; however, it is increasingly being used to study large-scale problems. Agent-based simulations present challenges for high performance simulation as they can be complex and computationally demanding, and it is therefore not surprising that this special issue includes several articles on the high performance simulation of such systems.

    High-performance simulation and simulation methodologies

    Editorial Comment. The realization of high performance simulation, which necessitates sophisticated simulation experimentation and optimization, often requires non-trivial amounts of computing power. Distributed computing techniques and systems found in areas such as High Performance Computing (HPC), High Throughput Computing (HTC), e-Infrastructures, grid and cloud computing can provide the required computing capacity for the execution of large and complex simulations. This extends the long tradition of adopting advances in distributed computing in simulation, as evidenced by contributions from the parallel and distributed simulation community. There has arguably been a recent acceleration of innovation in distributed computing tools and techniques, and this special issue presents the opportunity to showcase recent research that assimilates these advances in simulation. It brings together a contemporary collection of work showcasing original research in the advancement of simulation theory and practice with distributed computing. The special issue has two parts. The first part (published in the preceding issue of the journal) included seven studies in high performance simulation that support applications including the study of epidemics, social networks, urban mobility, and real-time embedded and cyber-physical systems. This second part focuses on original research in high performance simulation that supports a range of methods including DEVS, Petri nets and DES. Of the four papers in this issue, the manuscript by Bergero et al. (2013), which was submitted, reviewed and accepted for the special issue, was published in an earlier issue of SIMULATION at the author's request for early publication.

    Service-oriented simulation using web ontology

    Commercial-off-the-Shelf (COTS) Simulation Packages (CSPs) have proved popular in a wide industrial setting. Reuse of Simulation Component (SC) models by collaborating organisations or divisions is restricted, however, by the same semantic issues that restrict the inter-organisation use of other software services. Semantic models, in the form of an ontology, utilised by a web-service-based discovery and deployment architecture provide one approach to supporting simulation model reuse. Semantic interoperation is achieved using a domain-grounded SC ontology to identify reusable components, which are subsequently loaded into a CSP and executed locally or remotely. The work is based on a health service simulation that addresses the transportation of blood. The ontology-engineering framework and discovery architecture provide a novel approach to inter-organisation simulation, uncovering domain semantics and providing a less intrusive mechanism for component reuse. The resulting web of component models and simulation execution environments presents a nascent approach to simulation grids.
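
    To make the discovery step concrete, the following is a minimal sketch of how a domain-grounded SC ontology might be queried for reusable components using the rdflib library; the ontology file, namespace and class names are illustrative assumptions, not those used in the paper.

        # A minimal sketch of ontology-driven component discovery. The namespace,
        # ontology file and class names below are hypothetical stand-ins.
        from rdflib import Graph, Namespace

        SC = Namespace("http://example.org/simulation-components#")

        g = Graph()
        g.parse("blood_supply_components.ttl", format="turtle")  # hypothetical ontology

        # Find components grounded in the 'BloodTransportation' domain concept,
        # together with the CSP each component targets.
        query = """
        PREFIX sc: <http://example.org/simulation-components#>
        SELECT ?component ?csp WHERE {
            ?component a sc:SimulationComponent ;
                       sc:modelsDomainConcept sc:BloodTransportation ;
                       sc:deployableOn ?csp .
        }
        """
        for component, csp in g.query(query):
            print(f"Component {component} is reusable and can be loaded into {csp}")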

    Facilitating the analysis of a UK national blood service supply chain using distributed simulation

    In an attempt to investigate blood unit ordering policies, researchers have created a discrete-event model of the UK National Blood Service (NBS) supply chain in the Southampton area of the UK. The model was created using Simul8, a commercial-off-the-shelf discrete-event simulation package (CSP). However, as more hospitals were added to the model, it was discovered that the time needed to perform a single simulation run increased severely. It has been claimed that distributed simulation, a technique that uses the resources of many computers to execute a simulation model, can reduce simulation runtime. Further, an emerging standardized approach exists that supports distributed simulation with CSPs. These CSP Interoperability (CSPI) standards are compatible with the IEEE 1516 High Level Architecture (HLA), the de facto interoperability standard for distributed simulation. To investigate whether distributed simulation can reduce the execution time of the NBS supply chain simulation, this paper presents experiences of creating a distributed version of the CSP Simul8 according to the CSPI/HLA standards. It shows that the distributed version of the simulation does indeed run faster when the model reaches a certain size. Further, we argue that understanding the relationship of model features is key to performance. This is illustrated by experimentation with two different protocol implementations (using Time Advance Request (TAR) and Next Event Request (NER)). Our contribution is therefore the demonstration that distributed simulation is a useful technique for the timely execution of supply chain simulations of this type, and that careful analysis of model features can further increase performance.
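
    As an illustration of the two protocols compared in the paper, the sketch below contrasts the TAR and NER time-advance loops; the rti and federate objects stand in for an IEEE 1516 RTI binding, and their method names are assumptions rather than any specific vendor API.

        # Contrasting the two HLA time-advance patterns. The `rti` object is a
        # hypothetical stand-in for a real IEEE 1516 RTI ambassador binding.

        def run_with_tar(rti, federate, step=1.0, end_time=100.0):
            """Time Advance Request: advance in fixed time steps."""
            t = 0.0
            while t < end_time:
                rti.time_advance_request(t + step)     # ask to move exactly one step
                t = rti.wait_for_time_advance_grant()  # blocks until the grant callback
                federate.process_events_up_to(t)

        def run_with_ner(rti, federate, end_time=100.0):
            """Next Event Request: jump straight to the time of the next event."""
            t = 0.0
            while t < end_time:
                rti.next_event_request(end_time)       # advance no further than end_time
                t = rti.wait_for_time_advance_grant()
                federate.process_events_up_to(t)

    TAR proceeds in predictable lock-step increments, whereas NER lets a federate skip over quiet periods, which is one reason the relative performance of the two protocols depends on how densely events are distributed across the model.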

    High Speed Simulation Analytics

    Simulation, especially discrete-event simulation (DES) and agent-based simulation (ABS), is widely used in industry to support decision making. It is used to create predictive models, or Digital Twins, of systems in order to analyse what-if scenarios, perform sensitivity analysis on data and decisions, and even optimise the impact of decisions. Simulation-based Analytics, or just Simulation Analytics, therefore has a major role to play in Industry 4.0. However, a major issue in Simulation Analytics is speed. The extensive, continuous experimentation demanded by Industry 4.0 can take a significant time, especially if many replications are required. This is compounded by detailed models, as these can take a long time to simulate. Distributed Simulation (DS) techniques use multiple computers either to speed up the simulation of a single model by splitting it across the computers and/or to speed up experimentation by running experiments across multiple computers in parallel. This chapter discusses how DS and Simulation Analytics, as well as concepts from contemporary e-Science, can be combined to address the speed problem through a new approach called High Speed Simulation Analytics. We present a vision of High Speed Simulation Analytics and show how this might be integrated with the future of Industry 4.0.
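
    As a sketch of the experimentation side of this speed-up, the snippet below runs independent replications in parallel across local cores; the replication function is a hypothetical stand-in for a real DES or ABS run, and the same pattern extends naturally to multiple machines.

        # Independent replications farmed out across worker processes.
        import random
        from concurrent.futures import ProcessPoolExecutor
        from statistics import mean

        def run_replication(seed: int) -> float:
            """Stand-in for one simulation replication; returns a response metric."""
            rng = random.Random(seed)
            return sum(rng.expovariate(1.0) for _ in range(100_000))

        if __name__ == "__main__":
            seeds = range(32)  # 32 independent replications
            with ProcessPoolExecutor() as pool:  # one process per core by default
                results = list(pool.map(run_replication, seeds))
            print(f"Mean response over {len(results)} replications: {mean(results):.2f}")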

    Spin fluctuations in the quasi-two-dimensional Heisenberg ferromagnet GdI₂ studied by Electron Spin Resonance

    The spin dynamics of GdI₂ have been investigated by ESR spectroscopy. The temperature dependences of the resonance field and ESR intensity are well described by the model for the spin susceptibility proposed by Eremin et al. [Phys. Rev. B 64, 064425 (2001)]. The temperature dependence of the resonance linewidth shows a maximum similar to that of the electrical resistance and is discussed in terms of scattering processes between conduction electrons and localized spins.
    Comment: to be published in PR

    A cloud-agnostic queuing system to support the implementation of deadline-based application execution policies

    There are many scientific and commercial applications that require the execution of a large number of independent jobs, resulting in significant overall execution time. Such applications therefore typically require distributed computing infrastructures and science gateways to run efficiently and to be easily accessible for end-users. Optimising the execution of such applications in a cloud computing environment, keeping resource utilisation at a minimum while still completing the experiment by a set deadline, is of paramount importance. As container-based technologies become more widespread, support for job queuing and auto-scaling in such environments is increasingly important. Current container management technologies, such as Docker Swarm or Kubernetes, provide auto-scaling based on resource consumption but do not directly support job queuing and deadline-based execution policies. This paper presents JQueuer, a cloud-agnostic queuing system that supports the scheduling of a large number of jobs in containerised cloud environments. The paper also demonstrates how JQueuer, when integrated with MiCADO, a cloud application-level orchestrator and auto-scaling framework, can be used to implement deadline-based execution policies. This novel technical solution provides an important step towards the cost-optimisation of batch processing and job submission applications. In order to test and prove the effectiveness of the solution, the paper presents experimental results from executing an agent-based simulation application using the open-source REPAST simulation framework.
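
    By way of illustration, a deadline-based execution policy of the kind described can reduce to computing the smallest number of workers that can drain the job queue before the deadline; the formula and names below are an assumed sketch, not the published JQueuer algorithm.

        # Assumed sketch of a deadline-based scaling decision.
        import math

        def workers_needed(jobs_remaining: int,
                           avg_job_seconds: float,
                           seconds_to_deadline: float,
                           max_workers: int) -> int:
            """Smallest worker count that can drain the queue before the deadline."""
            if jobs_remaining == 0:
                return 0
            if seconds_to_deadline <= 0:
                return max_workers  # deadline already missed; scale out fully
            jobs_per_worker = max(1, math.floor(seconds_to_deadline / avg_job_seconds))
            return min(max_workers, math.ceil(jobs_remaining / jobs_per_worker))

        # e.g. 500 jobs of ~90 s each with two hours left -> scale to 7 containers
        print(workers_needed(500, 90.0, 7200.0, 50))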

    The Galactic Halo in Mixed Dark Matter Cosmologies

    A possible solution to the small-scale problems of the cold dark matter (CDM) scenario is that the dark matter consists of two components, a cold one and a warm one. We perform a set of high resolution simulations of the Milky Way halo, varying the mass of the WDM particle ($m_{\rm WDM}$) and the cosmic dark matter mass fraction in the WDM component ($\bar{f}_{\rm W}$). The scaling ansatz introduced in combined analyses of LHC and astroparticle searches postulates that the relative contribution of each dark matter component is the same locally as on average in the Universe (e.g. $f_{\rm W,\odot} = \bar{f}_{\rm W}$). Here we find, however, that the normalised local WDM fraction ($f_{\rm W,\odot} / \bar{f}_{\rm W}$) depends strongly on $m_{\rm WDM}$ for $m_{\rm WDM} < 1$ keV. Using the scaling ansatz can therefore introduce significant errors into the interpretation of dark matter searches. To correct this issue, a simple formula that fits the local dark matter densities of each component is provided.
    Comment: 19 pages, 10 figures, accepted for publication in JCA
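
    For reference, the local WDM fraction that the ansatz constrains can be written in terms of the local densities of the two components (standard notation, assumed here rather than quoted from the paper):

        \[
          f_{\mathrm{W},\odot} = \frac{\rho_{\mathrm{WDM},\odot}}{\rho_{\mathrm{CDM},\odot} + \rho_{\mathrm{WDM},\odot}},
          \qquad \text{scaling ansatz:}\quad f_{\mathrm{W},\odot} = \bar{f}_{\mathrm{W}} .
        \]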

    Limited health literacy is associated with reduced access to kidney transplantation

    Limited health literacy is common in patients with chronic kidney disease (CKD) and has been variably associated with adverse clinical outcomes. The prevalence of limited health literacy is lower in kidney transplant recipients than in individuals starting dialysis, suggesting selection of patients with higher health literacy for transplantation. We investigated the relationship between limited health literacy and clinical outcomes, including access to kidney transplantation, in a prospective UK cohort study of 2,274 incident dialysis patients aged 18-75 years. Limited health literacy was defined using the validated Single Item Literacy Screener (SILS). Multivariable regression was used to test for associations with outcomes after adjusting for age, sex, socioeconomic status (educational level and car ownership), ethnicity, first language, primary renal diagnosis, and comorbidity. In fully adjusted analyses, limited health literacy was not associated with mortality, late presentation to nephrology, dialysis modality, haemodialysis vascular access, or pre-emptive kidney transplant listing, but was associated with a reduced likelihood of listing for a deceased-donor transplant (hazard ratio [HR] 0.68; 95% confidence interval [CI] 0.51-0.90), receiving a living-donor transplant (HR 0.41; 95% CI 0.19-0.88), or receiving a transplant from any donor type (HR 0.65; 95% CI 0.44-0.96). Limited health literacy is associated with reduced access to kidney transplantation, independent of patient demographics, socioeconomic status, and comorbidity. Interventions to ameliorate the effects of low health literacy may improve access to kidney transplantation.
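
    For illustration, the sketch below shows how such an adjusted time-to-event model could be fitted with the lifelines library; the dataset and column names are hypothetical, not the study's data, and categorical covariates are assumed to be numerically encoded already.

        # Hypothetical sketch of an adjusted Cox model yielding hazard ratios.
        import pandas as pd
        from lifelines import CoxPHFitter

        # One row per incident dialysis patient; covariates mirror the
        # adjustments described above (all assumed pre-encoded as numbers).
        df = pd.read_csv("cohort.csv")  # hypothetical file

        cph = CoxPHFitter()
        cph.fit(
            df[["time_to_listing", "listed", "limited_health_literacy",
                "age", "sex", "education_level", "comorbidity_score"]],
            duration_col="time_to_listing",
            event_col="listed",
        )
        cph.print_summary()  # the exp(coef) column gives adjusted hazard ratios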